There we go. So we were talking about knowledge and learning. After having completed regression-based learning, essentially neural nets and linear regression, I tried to convince you that knowledge might actually be useful in learning, because it feels counterintuitive that whenever you learn you start from zero, from a blank slate. And since we don't really know how to localize learning or knowledge in, say, the weights of a neural network, a natural way is to change the learning paradigm to something that can actually represent knowledge, and that would be logic. And we've talked a bit about inductive learning and how that actually works with logic.
It turns out we can formulate the learning problem quite easily in logic. The question, of course, is: can we learn efficiently? How does that work? The upshot of the whole thing is essentially that we need to be able to express the stuff we want to learn in terms of logic, which is a big assumption for motor skills and so on. Think about RoboCup, robot soccer: most of the tasks there are motor skills, sensor skills, and those kinds of things, where the big problems are actually kicking the ball so that it goes towards the goal and, if you're a humanoid robot, not falling down at the same time. Those are motor skills, and it would be very difficult to express them in logic. So this whole idea of knowledge and learning is really quite restricted to things that we can express well in logic. But if you can, if you have the right predicates and so on, then it's very simple. You just basically have to solve an
entailment problem. Essentially, if you think of this as an equation (though it's not really an equation, it's directed), you have a set of examples, which we think of as the data; we have a set of, say, classifications, which is the stuff we want to learn; and then we have hypotheses, and the data plus the hypothesis should entail the classifications. So that's really what we want, but of course there's a lot that can go wrong here, and that's the main problem.
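Written out as an entailment constraint (this is the standard way to phrase it; the symbol names are generic placeholders):

\[ \mathit{Hypothesis} \land \mathit{Descriptions} \models \mathit{Classifications} \]

Here the Descriptions are the example data, the Classifications are what we want to predict for them, and learning means finding a Hypothesis that makes this entailment hold without contradicting the examples.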
If we go to the restaurant example again, that's easy to express in logic; indeed, these attributes are very simple logical things as well. And the idea is that, with this framework, we can actually do cumulative learning. We discussed a couple of examples. One is the caveman example: when you're learning those advanced techniques, you're actually building on the fact that you've already learned that, say, cooked food is good.
The other example we're going to look at is the Brazilian example, where you meet somebody at the airport, Fernando; you know that Fernando is Brazilian and that he speaks Portuguese. Now what can we learn from this? Well, there are two obvious things to learn. One is that everybody in Brazil speaks Portuguese, which is kind of reasonable. The other thing you could learn is that everybody is called Fernando, which is not so reasonable. So that's a problem for logic-based techniques, and we'll use this example to see how we can get around this perceived asymmetry.
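Written out as candidate generalizations (predicate names chosen here purely for illustration), the asymmetry looks like this:

\[ \forall x.\; \mathit{Nationality}(x, \mathit{Brazil}) \Rightarrow \mathit{Language}(x, \mathit{Portuguese}) \]
\[ \forall x.\; \mathit{Nationality}(x, \mathit{Brazil}) \Rightarrow \mathit{Name}(x, \mathit{Fernando}) \]

Both rules are consistent with the single observation about Fernando; only prior knowledge about which attributes are relevant (nationality typically determines language, but not first names) lets us prefer the first over the second.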
The third example is the one where you have a doctor; she has an intern, and the intern observes the doctor. We see that the doctor has a patient, the patient describes the disease, the doctor asks a couple of questions and then says: why don't you take this and that antibiotic? The intern is supposed to learn something from this, and probably what the intern learns is that this particular antibiotic is good for that particular disease. And if the intern sticks around long enough, she might be able to generalize, saying: oh, whenever there is a cough involved, take this, or something like that. And the theme we're going to follow is that we use background knowledge in this kind of iterative way, building up a body of knowledge that actually makes subsequent learning easier.
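As a sketch of what this means formally (this is the usual textbook shape of knowledge-based learning, with the same generic symbol names as above), the background knowledge sits on the left of the entailment together with the data, and whatever gets learned is added to the background for the next round:

\[ \mathit{Background} \land \mathit{Hypothesis} \land \mathit{Descriptions} \models \mathit{Classifications} \]

The richer the Background becomes, the less the Hypothesis has to carry on its own, which is exactly why subsequent learning gets easier.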
And probably that rings a bell; probably that's something you feel familiar with. Usually you take AI1 before AI2 because the knowledge builds up, hopefully. Okay, and then we talked about the three kinds of learning we're going to look at, which line up with the three examples from before. First we're going to look at very simple explanation-based learning and the kind of learning equations it uses; this is, in a way, the caveman learning. Then we have relevance-based learning, which is the Brazil-languages learning. And finally we're going to use inductive logic programming for learning general rules from examples; the corresponding learning equations are sketched below.
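As a sketch of those learning equations, in the usual textbook formulation and again with generic symbol names: explanation-based learning extracts its hypothesis from what the background already entails,

\[ \mathit{Hypothesis} \land \mathit{Descriptions} \models \mathit{Classifications} \qquad \text{and} \qquad \mathit{Background} \models \mathit{Hypothesis} \]

relevance-based learning uses the background to license the generalization from the observed examples,

\[ \mathit{Background} \land \mathit{Descriptions} \land \mathit{Classifications} \models \mathit{Hypothesis} \]

and inductive logic programming uses the cumulative constraint shown above, where the hypothesis adds genuinely new knowledge beyond the background.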
Okay, any questions so far? Good, that gives me the time to discuss evaluations. As I was very late, there were only